Optical coherence tomography (OCT) is a non-invasive 3D modality widely used in ophthalmology for imaging the retina. Automated anatomical retinal layer segmentation on OCT is important for the detection and monitoring of retinal diseases such as age-related macular degeneration (AMD) or diabetic retinopathy. However, most state-of-the-art layer segmentation methods are based on purely supervised deep learning, which requires large amounts of pixel-level annotated data that are expensive and hard to obtain. With this in mind, we introduce a semi-supervised paradigm to the retinal layer segmentation task, which leverages the information present in large-scale unlabeled datasets as well as anatomical priors. In particular, a novel fully differentiable approach is used to convert surface position regression into a pixel-wise structured segmentation, allowing the model to be trained with 1D surface and 2D layer representations simultaneously, in a coupled manner. Furthermore, these 2D segmentations are used as anatomical factors that, together with learned style factors, compose a disentangled representation used to reconstruct the input image. In parallel, we propose a set of anatomical priors that improve network training when limited labeled data is available. We demonstrate on a real-world dataset of scans with intermediate and wet AMD that our method outperforms the state of the art when using our full training set, and notably also when using only a fraction of the labeled data.
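The surface-to-pixel conversion described above can be made differentiable in more than one way. Below is a minimal sketch of one such construction, turning regressed 1D surface rows into soft 2D layer maps via sigmoid thresholding; the function names and the temperature parameter are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def surfaces_to_layers(surfaces, height, temperature=1.0):
    """Differentiably convert regressed surface rows into soft layer maps.

    surfaces: (S, W) array of row positions, ordered top to bottom.
    Returns a (S + 1, height, W) array of per-pixel layer memberships
    that sum to 1 along the layer axis.
    """
    rows = np.arange(height, dtype=float)[None, :, None]          # (1, H, 1)
    # Soft indicator of "this pixel lies below surface k".
    below = sigmoid((rows - surfaces[:, None, :]) / temperature)  # (S, H, W)
    upper = np.concatenate([np.ones_like(below[:1]), below])      # (S+1, H, W)
    lower = np.concatenate([below, np.zeros_like(below[:1])])
    # Layer k is the band between surfaces k-1 and k (telescoping difference).
    return upper - lower

# Two flat surfaces at rows 3 and 6 of a 10-row B-scan with 4 columns -> 3 layers.
maps = surfaces_to_layers(np.array([[3.0] * 4, [6.0] * 4]), height=10)
```

Because every operation is smooth, gradients from a 2D segmentation loss can flow back into the 1D surface positions, which is the kind of coupling the abstract refers to.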
Breast cancer is the most common malignancy in women, being responsible for more than half a million deaths every year. As such, early and accurate diagnosis is of paramount importance. Human expertise is required to diagnose and correctly classify breast cancer and define the appropriate therapy, which depends on the evaluation of the expression of different biomarkers such as the transmembrane protein receptor HER2. This evaluation requires several steps, including special techniques such as immunohistochemistry or in situ hybridization, to assess HER2 status. With the goal of reducing the number of steps and human bias in diagnosis, the HEROHE challenge was organized as a parallel event of the 16th European Congress on Digital Pathology, aiming to automate the assessment of HER2 status based only on hematoxylin and eosin-stained tissue samples of invasive breast cancer. Methods to assess HER2 status were presented by 21 teams worldwide, and several of the proposed approaches achieved promising results, with the potential to advance the state of the art.
The field of robotics, and more especially humanoid robotics, has several established competitions with research-oriented goals in mind. By challenging the robots in a handful of tasks, these competitions provide a way to gauge the state of the art in robotic design, as well as an indicator of how far we are from reaching human performance. The most notable competitions are RoboCup, which has the long-term goal of competing against a real human team in 2050, and the FIRA HuroCup league, in which humanoid robots have to perform tasks based on actual Olympic events. Having robots compete against humans under the same rules is a challenging goal, and we believe that it is in the sport of archery that humanoid robots have the most potential to achieve it in the near future. In this work, we perform a first step in this direction. We present a humanoid robot that is capable of gripping, drawing and shooting a recurve bow at a target 10 meters away with considerable accuracy. Additionally, we show that it is also capable of shooting at distances of over 50 meters.
State-of-the-art brain tumor segmentation is based on deep learning models applied to multi-modal MRIs. Currently, these models are trained on images after a preprocessing stage that involves registration, interpolation, brain extraction (BE, also known as skull-stripping) and manual correction by an expert. However, for clinical practice, this last step is tedious and time-consuming and, therefore, not always feasible, resulting in skull-stripping faults that can negatively impact the tumor segmentation quality. Still, the extent of this impact has never been measured for any of the many different BE methods available. In this work, we propose an automatic brain tumor segmentation pipeline and evaluate its performance with multiple BE methods. Our experiments show that the choice of a BE method can compromise up to 15.7% of the tumor segmentation performance. Moreover, we propose training and testing tumor segmentation models on non-skull-stripped images, effectively discarding the BE step from the pipeline. Our results show that this approach leads to a competitive performance at a fraction of the time. We conclude that, in contrast to the current paradigm, training tumor segmentation models on non-skull-stripped images can be the best option when high performance in clinical practice is desired.
Bi-encoders and cross-encoders are widely used in many state-of-the-art retrieval pipelines. In this work we study the generalization ability of these two types of architectures across a wide range of parameter counts, in both in-domain and out-of-domain scenarios. We find that the number of parameters and early query-document interactions of cross-encoders play a significant role in the generalization ability of retrieval models. Our experiments show that increasing model size results in marginal gains on in-domain test sets, but much larger gains in new domains never seen during fine-tuning. Furthermore, we show that cross-encoders largely outperform bi-encoders of similar size in several tasks. In the BEIR benchmark, our largest cross-encoder surpasses a state-of-the-art bi-encoder by more than 4 average points. Finally, we show that using bi-encoders as first-stage retrievers provides no gains in comparison to a simpler retriever such as BM25 on out-of-domain tasks. The code is available at https://github.com/guilhermemr04/scaling-zero-shot-retrieval.git
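The architectural contrast behind these findings can be illustrated with a toy sketch: a bi-encoder encodes query and document independently and interacts only through a final dot product, while a cross-encoder processes the pair jointly. The bag-of-words encoder and the bilinear interaction below are placeholders standing in for real transformer encoders, not an actual retrieval implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = {"neural": 0, "retrieval": 1, "dense": 2, "sparse": 3, "ranking": 4}

def embed(text, dim=len(VOCAB)):
    """Toy stand-in for a neural text encoder: normalized bag of words."""
    vec = np.zeros(dim)
    for tok in text.lower().split():
        if tok in VOCAB:
            vec[VOCAB[tok]] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def bi_encoder_score(query, doc):
    # Query and document are encoded independently, so document vectors
    # can be precomputed and indexed; interaction is only the final dot.
    return float(embed(query) @ embed(doc))

def cross_encoder_score(query, doc, W=rng.standard_normal((len(VOCAB), len(VOCAB)))):
    # The pair is scored jointly, allowing early query-document
    # interactions (a bilinear form is a crude placeholder for attention).
    return float(embed(query) @ W @ embed(doc))
```

In a real pipeline the cross-encoder is too expensive to run over an entire corpus, so it typically reranks candidates produced by a first-stage retriever such as BM25 or a bi-encoder.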
Besides accuracy, recent studies on machine learning models have been addressing the question of how the obtained results can be interpreted. Indeed, while complex machine learning models are able to provide very good results in terms of accuracy, even in challenging applications, they are difficult to interpret. Aiming at providing some interpretability for such models, one of the most famous methods, called SHAP, borrows the Shapley value concept from game theory in order to locally explain the predicted outcome of an instance of interest. As the calculation of SHAP values requires prior computations over all possible coalitions of attributes, its computational cost can be very high. Therefore, a SHAP-based method called Kernel SHAP adopts an efficient strategy that approximates such values with less computational effort. In this paper, we also address local interpretability in machine learning based on Shapley values. Firstly, we provide a straightforward formulation of a SHAP-based method for local interpretability by using the Choquet integral, which leads to both Shapley values and Shapley interaction indices. Moreover, we adopt the concept of $k$-additive games from game theory, which helps reduce the computational effort when estimating the SHAP values. The obtained results attest that our proposal requires fewer computations on coalitions of attributes to approximate the SHAP values.
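For context, the exact Shapley computation that Kernel SHAP and the $k$-additive approximation aim to avoid enumerates all $2^n$ coalitions. The sketch below shows that exact calculation on a toy additive game, where each player's Shapley value must equal its own weight; it illustrates the cost being approximated, not the paper's method.

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values of an n-player cooperative game.

    value: function mapping a frozenset of player indices to a payoff.
    Cost is exponential in n, since all coalitions are enumerated.
    """
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for size in range(n):
            for coal in combinations(others, size):
                s = frozenset(coal)
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Additive game: the value of a coalition is the sum of its members' weights.
w = [1.0, 2.0, 3.0]
vals = shapley_values(lambda s: sum(w[i] for i in s), 3)
```

The efficiency axiom guarantees that the values sum to the grand-coalition payoff, a useful sanity check for any approximation scheme.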
The use of algorithm-agnostic approaches is an emerging area of machine learning for explaining the contribution of individual features towards a predicted outcome. While the focus has been placed on explaining the prediction itself, little has been done to explain the robustness of these models, that is, how each feature contributes towards achieving that robustness. In this paper, we propose the use of Shapley values to explain the contribution of each feature towards the model's robustness, measured in terms of the Receiver Operating Characteristic (ROC) curve and the area under the ROC curve (AUC). With the help of an illustrative example, we demonstrate the proposed idea of explaining the ROC curve and show how the uncertainty in these curves can be visualized. For imbalanced datasets, the use of the Precision-Recall Curve (PRC) is considered more appropriate; therefore, we also demonstrate how to explain the PRC with the help of Shapley values.
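One way to make the idea above concrete is to define the coalition value as the AUC achieved when only a subset of features is available. In the sketch below the restricted model is a plain sum of the available features; the paper does not prescribe this particular restricted model, so treat it as an assumption.

```python
import numpy as np
from itertools import combinations
from math import factorial

def auc(scores, labels):
    """Mann-Whitney AUC: P(positive score > negative score), ties count 1/2."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

def shapley_auc(X, y, n_feats):
    """Shapley contribution of each feature to AUC, with coalition value
    v(S) = AUC of a sum-of-features score restricted to S (chance = 0.5)."""
    def v(s):
        return auc(X[:, sorted(s)].sum(axis=1), y) if s else 0.5
    phi = np.zeros(n_feats)
    for i in range(n_feats):
        others = [j for j in range(n_feats) if j != i]
        for size in range(n_feats):
            for coal in combinations(others, size):
                s = set(coal)
                w = factorial(len(s)) * factorial(n_feats - len(s) - 1) / factorial(n_feats)
                phi[i] += w * (v(s | {i}) - v(s))
    return phi

# Feature 0 separates the classes perfectly; feature 1 is uninformative.
X = np.array([[0.0, 1.0], [1.0, 0.0], [2.0, 1.0], [3.0, 0.0]])
y = np.array([0, 0, 1, 1])
phi = shapley_auc(X, y, 2)
```

By efficiency, the contributions sum to the full model's AUC minus the chance level, so the decomposition attributes exactly the AUC gained over random guessing.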
The universal approximation theorem asserts that a single-hidden-layer neural network can approximate continuous functions on compact sets with any desired precision. As an existence result, the universal approximation theorem supports the use of neural networks in various applications, including regression and classification tasks. The universal approximation theorem is not limited to real-valued neural networks; it also holds for complex-, quaternion-, tessarine-, and Clifford-valued neural networks. This paper extends the universal approximation theorem to a broad class of hypercomplex-valued neural networks. Precisely, we first introduce the concept of non-degenerate hypercomplex algebras. Complex numbers, quaternions, and tessarines are examples of non-degenerate hypercomplex algebras. Then, we state the universal approximation theorem for hypercomplex-valued neural networks defined on a non-degenerate algebra.
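As a concrete example of a hypercomplex algebra, the sketch below implements the Hamilton product of quaternions and a single quaternion-valued neuron. The split (component-wise) tanh activation is a common choice in the hypercomplex-valued network literature, assumed here for illustration.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions (a, b, c, d) ~ a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ])

def quaternion_neuron(weights, inputs, bias):
    """Quaternion-valued neuron: tanh(sum_i w_i * x_i + b), with tanh
    applied component-wise (a "split" activation)."""
    acc = np.asarray(bias, dtype=float).copy()
    for w, x in zip(weights, inputs):
        acc += qmul(w, x)
    return np.tanh(acc)

# Non-commutativity of the algebra: i * j = k, while j * i = -k.
i, j = np.array([0.0, 1, 0, 0]), np.array([0.0, 0, 1, 0])
```

Non-commutative products like this one are exactly what distinguishes hypercomplex-valued networks from running four independent real-valued networks on the components.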
We present MR-Net, a general architecture for multiresolution neural networks, as well as a framework for imaging applications based on this architecture. Our coordinate-based networks are continuous in both space and scale, as they are composed of multiple stages that progressively add finer details. Besides that, they provide a compact and efficient representation. We show examples of multiresolution image representation and applications for texture magnification, minification, and antialiasing.
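The multistage idea can be sketched as a sum of coordinate-based stages operating on increasingly high frequency bands, so that evaluating only a prefix of the stages yields a coarser version of the signal. This is an illustrative construction with random Fourier features and untrained weights, not the actual MR-Net architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

class Stage:
    """One stage: random Fourier features at a given frequency scale,
    followed by a linear head (the weights would normally be trained)."""
    def __init__(self, freq, n_features=32):
        self.B = rng.standard_normal((2, n_features)) * freq
        self.w = rng.standard_normal(2 * n_features) * 0.1

    def __call__(self, coords):
        proj = coords @ self.B
        feats = np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)
        return feats @ self.w

def multires_eval(coords, n_stages=3, base_freq=1.0):
    """Sum the stages coarse-to-fine; truncating the sum would give a
    band-limited (coarser) approximation of the represented image."""
    stages = [Stage(base_freq * 2 ** k) for k in range(n_stages)]
    return sum(stage(coords) for stage in stages)

# Sample the continuous representation on a 4x4 grid of (x, y) coordinates.
grid = np.linspace(0.0, 1.0, 4)
xy = np.stack(np.meshgrid(grid, grid), axis=-1).reshape(-1, 2)
img = multires_eval(xy)
```

Because the representation is a function of continuous coordinates, it can be resampled at any resolution, which is what enables magnification, minification, and antialiasing from a single model.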
Principal component analysis (PCA) is a ubiquitous dimensionality reduction technique in signal processing, which searches for a projection matrix that minimizes the squared error between the reduced dataset and the original one. Since classical PCA is not tailored to address concerns related to fairness, its application to real-world problems may lead to disparities in the reconstruction errors of different groups (e.g., men and women, whites and blacks, etc.), with potentially harmful consequences such as introducing bias towards sensitive groups. Although several fair versions of PCA have been proposed recently, there is still a fundamental gap in the search for algorithms that are simple enough to be deployed in real systems. To address this, we propose a novel PCA algorithm which tackles fairness issues by means of a simple strategy comprising a one-dimensional search that exploits the closed-form solution of PCA. As attested by numerical experiments, the proposal can significantly improve fairness with a very small loss in the overall reconstruction error and without resorting to complex optimization schemes. Moreover, our findings are consistent across several real-world scenarios and for both unbalanced and balanced datasets.
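The abstract does not specify which scalar the one-dimensional search runs over, so the sketch below makes one plausible assumption: a grid search over a parameter interpolating the two group covariance matrices, solving each candidate with the closed-form (eigendecomposition) PCA solution and keeping the projection that best balances the group reconstruction errors. It illustrates the search strategy only, not the authors' exact algorithm.

```python
import numpy as np

def pca_projection(cov, k):
    """Closed-form PCA: the top-k eigenvectors of a covariance matrix."""
    vals, vecs = np.linalg.eigh(cov)
    return vecs[:, np.argsort(vals)[::-1][:k]]

def group_error(X, U):
    """Mean squared reconstruction error of X under the projection U."""
    residual = X - X @ U @ U.T
    return float((residual ** 2).sum(axis=1).mean())

def fair_pca_1d_search(X_a, X_b, k, n_grid=101):
    """1D grid search over alpha in [0, 1]: each candidate covariance
    alpha * cov_a + (1 - alpha) * cov_b is solved in closed form, and the
    projection minimizing the gap between group errors is returned."""
    cov_a = np.cov(X_a, rowvar=False)
    cov_b = np.cov(X_b, rowvar=False)
    best_gap, best_U = np.inf, None
    for alpha in np.linspace(0.0, 1.0, n_grid):
        U = pca_projection(alpha * cov_a + (1.0 - alpha) * cov_b, k)
        gap = abs(group_error(X_a, U) - group_error(X_b, U))
        if gap < best_gap:
            best_gap, best_U = gap, U
    return best_U
```

Each candidate costs a single d-by-d eigendecomposition, so the whole search stays simple enough for deployment, which is the design goal stated in the abstract.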